
    On posterior distribution of Bayesian wavelet thresholding

    We investigate the posterior rate of convergence for wavelet shrinkage using a Bayesian approach in general Besov spaces. Instead of studying the Bayesian estimator related to a particular loss function, we focus on the posterior distribution itself from a nonparametric Bayesian asymptotics point of view and study its rate of convergence. We obtain the same rate as in \citet{abramovich04}, where the authors studied the convergence of several Bayesian estimators.
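
    A minimal sketch of the wavelet shrinkage setup whose posterior behavior is studied above: noisy observations are decomposed into wavelet coefficients, the detail coefficients are thresholded, and the signal is reconstructed. The soft universal threshold used here is only a stand-in for the Bayesian thresholding rule, and the PyWavelets package, the db4 wavelet, and the decomposition level are assumptions for illustration.

```python
import numpy as np
import pywt  # assumes the PyWavelets package is available

def wavelet_shrink(y, wavelet="db4", level=4):
    """Soft-threshold detail coefficients with a universal-style threshold."""
    n = len(y)
    coeffs = pywt.wavedec(y, wavelet, level=level)
    # noise scale estimated from the finest-level details (median absolute deviation)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(n))
    shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(shrunk, wavelet)[:n]

# usage: denoise noisy samples of a smooth signal
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512)
y = np.sin(8 * np.pi * t ** 2) + 0.3 * rng.standard_normal(512)
f_hat = wavelet_shrink(y)
```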

    Minimax Prediction for Functional Linear Regression with Functional Responses in Reproducing Kernel Hilbert Spaces

    In this article, we consider convergence rates in functional linear regression with functional responses, where the linear coefficient lies in a reproducing kernel Hilbert space (RKHS). Without assuming that the reproducing kernel and the covariate covariance kernel are aligned, or that the eigenvalues of the covariance kernel decay at a polynomial rate, we establish convergence rates in prediction risk. The corresponding lower bound on the rates is derived by reducing to the scalar response case. Simulation studies and two benchmark datasets are used to illustrate that the proposed approach can significantly outperform the functional PCA approach in prediction.
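
    As a rough illustration of function-on-function linear prediction, the sketch below discretizes the covariates and responses on grids and fits a ridge-penalized coefficient surface; it is a crude stand-in for the RKHS estimator analyzed above, and the grid sizes, penalty level, and simulated Brownian-path covariates are assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n, p, q = 200, 50, 40                      # samples, input grid size, output grid size
s = np.linspace(0, 1, p)
t = np.linspace(0, 1, q)
X = rng.standard_normal((n, p)).cumsum(axis=1) / np.sqrt(p)   # rough Brownian paths X_i(s)
beta = np.outer(np.sin(np.pi * s), np.cos(np.pi * t))         # true coefficient surface b(s, t)
Y = X @ beta / p + 0.1 * rng.standard_normal((n, q))          # Riemann-sum integral plus noise

model = Ridge(alpha=1.0).fit(X, Y)         # one ridge fit shared across output grid points
Y_hat = model.predict(X)
print("in-sample prediction MSE:", np.mean((Y - Y_hat) ** 2))
```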

    On rates of convergence for posterior distributions under misspecification

    We extend the approach of Walker (2003, 2004) to the case of misspecified models. A sufficient condition for establishing rates of convergence is given based on a key identity involving martingales, which does not require the construction of tests. We also show, roughly, that the results obtained by using tests can also be obtained by our approach, which demonstrates the potentially wider applicability of this method.

    Flexible Shrinkage Estimation in High-Dimensional Varying Coefficient Models

    We consider the problem of simultaneous variable selection and constant coefficient identification in high-dimensional varying coefficient models based on a B-spline basis expansion. Both objectives can be viewed as model selection problems, and we show that they can be achieved by a double shrinkage strategy. We apply the adaptive group Lasso penalty in models involving a diverging number of covariates, which can be much larger than the sample size, while assuming via model sparsity that the number of relevant variables is smaller than the sample size. Such so-called ultra-high-dimensional settings are especially challenging for the semiparametric models considered here and have not been dealt with before. Under suitable conditions, we show that consistency in terms of both variable selection and constant coefficient identification can be achieved, as well as the oracle property of the constant coefficients. Even in the case where the zero and constant coefficients are known a priori, our results appear to be new, as they then reduce to results for semivarying coefficient models (also known as partially linear varying coefficient models) with a diverging number of covariates. We also theoretically demonstrate the consistency of a semiparametric BIC-type criterion in this high-dimensional context, extending several previous results. The finite-sample behavior of the estimator is evaluated in Monte Carlo studies.
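
    A minimal sketch of the basis-expansion-plus-group-shrinkage idea behind the method described above: each varying coefficient is expanded in a spline-like basis, and the groups of basis coefficients are penalized jointly so that an entire covariate can be dropped. A plain (not adaptive) group lasso solved by proximal gradient descent is used as a stand-in, and the truncated-power basis, penalty level, and step size are assumptions for illustration.

```python
import numpy as np

def basis(u, K=6):
    # truncated-power basis standing in for B-splines
    knots = np.linspace(0, 1, K)[1:-1]
    cols = [np.ones_like(u), u] + [np.maximum(u - k, 0.0) ** 2 for k in knots]
    return np.column_stack(cols)

def group_lasso_vc(y, X, u, lam=0.1, n_iter=2000):
    """Group-lasso fit of a varying coefficient model via proximal gradient descent."""
    n, p = X.shape
    B = basis(u)
    K = B.shape[1]
    Z = np.column_stack([X[:, [j]] * B for j in range(p)])   # grouped design matrix
    gamma = np.zeros(p * K)
    step = n / np.linalg.norm(Z, 2) ** 2                     # 1 / Lipschitz constant of the loss
    for _ in range(n_iter):
        g = gamma - step * Z.T @ (Z @ gamma - y) / n         # gradient step on the squared loss
        for j in range(p):                                   # group soft-thresholding
            blk = slice(j * K, (j + 1) * K)
            norm = np.linalg.norm(g[blk])
            g[blk] *= max(0.0, 1.0 - step * lam / max(norm, 1e-12))
        gamma = g
    return gamma.reshape(p, K)              # row j holds the basis coefficients of beta_j(u)

# usage: two covariates, only the first has a (varying) effect
rng = np.random.default_rng(8)
n = 300
u = rng.uniform(size=n)
X = rng.standard_normal((n, 2))
y = np.sin(2 * np.pi * u) * X[:, 0] + 0.2 * rng.standard_normal(n)
print(np.linalg.norm(group_lasso_vc(y, X, u), axis=1))       # group norms of fitted coefficients
```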

    Bayesian Nonlinear Principal Component Analysis Using Random Fields

    We propose a novel model for nonlinear dimension reduction motivated by the probabilistic formulation of principal component analysis. Nonlinearity is achieved by specifying different transformation matrices at different locations of the latent space and smoothing the transformation using a Markov random field type prior. The computation is made feasible by recent advances in sampling from von Mises-Fisher distributions.
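
    The computational ingredient highlighted above is sampling from the von Mises-Fisher distribution on the unit sphere. A minimal sketch using SciPy's built-in sampler follows (available in SciPy 1.11 and later, an assumption about the environment); the mean direction and concentration are arbitrary illustration values, not part of the proposed model.

```python
import numpy as np
from scipy.stats import vonmises_fisher  # requires SciPy >= 1.11

mu = np.array([0.0, 0.0, 1.0])      # mean direction on the unit sphere
kappa = 20.0                        # concentration: larger values cluster draws around mu
vmf = vonmises_fisher(mu, kappa)

rng = np.random.default_rng(2)
samples = vmf.rvs(size=1000, random_state=rng)   # 1000 draws, each a unit 3-vector
print(samples.mean(axis=0))         # roughly proportional to mu for large kappa
print(vmf.logpdf(mu))               # log-density at the mode
```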

    Cross Validation for Comparing Multiple Density Estimation Procedures

    We demonstrate the consistency of cross validation for comparing multiple density estimators using simple inequalities on the likelihood ratio. In nonparametric problems, the splitting of the data does not require that the test data dominate the training/estimation data, contrary to Shao (1993). The result is complementary to those of Yang (2005) and Yang (2006).
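
    A minimal sketch of likelihood-based cross validation for comparing density estimators: candidate estimators are fit on a training split and the one with the larger held-out log-likelihood is selected. The particular candidates (two kernel bandwidths and a Gaussian fit) and the even split are assumptions for illustration, not the splitting scheme analyzed in the paper.

```python
import numpy as np
from scipy.stats import norm
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2, 1.0, 300), rng.normal(2, 0.5, 300)])
rng.shuffle(x)
train, test = x[:300], x[300:]      # even split between estimation and validation

candidates = {
    "kde, h=0.2": lambda z: KernelDensity(bandwidth=0.2).fit(train[:, None]).score_samples(z[:, None]),
    "kde, h=1.0": lambda z: KernelDensity(bandwidth=1.0).fit(train[:, None]).score_samples(z[:, None]),
    "gaussian":   lambda z: norm(train.mean(), train.std()).logpdf(z),
}
scores = {name: logpdf(test).sum() for name, logpdf in candidates.items()}
print("selected estimator:", max(scores, key=scores.get))
print(scores)
```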

    Posterior Convergence and Model Estimation in Bayesian Change-point Problems

    We study the posterior distribution of the Bayesian multiple change-point regression problem when the number and the locations of the change points are unknown. While it is relatively easy to apply the general theory to obtain the $O(1/\sqrt{n})$ rate up to some logarithmic factor, showing the exact parametric rate of convergence of the posterior distribution requires additional work and assumptions. Additionally, we demonstrate the asymptotic normality of the segment levels under these assumptions. For inferences on the number of change points, we show that the Bayesian approach can produce a consistent posterior estimate. Finally, we argue that the point-wise posterior convergence property demonstrated here may have poor finite-sample performance, in that a posterior that is consistent for model selection necessarily implies that the maximal squared risk will be asymptotically larger than the optimal $O(1/\sqrt{n})$ rate. This is the Bayesian version of the same phenomenon that has been noted and studied by other authors.
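
    A minimal sketch of the posterior computation in the simplest version of this setting: a single change point in a Gaussian mean-shift model with known noise level, where the segment means are integrated out under a conjugate normal prior and the posterior over the change-point location is computed exactly. The single change point, the known sigma, and the prior scale tau are simplifying assumptions; the paper treats an unknown number of change points.

```python
import numpy as np

def segment_logml(y, sigma2=1.0, tau2=10.0):
    # log marginal likelihood of one constant-mean segment with mean ~ N(0, tau2)
    n, s, ss = len(y), y.sum(), np.sum(y ** 2)
    return (-0.5 * n * np.log(2 * np.pi * sigma2)
            - 0.5 * np.log(1.0 + n * tau2 / sigma2)
            - 0.5 / sigma2 * (ss - tau2 / (sigma2 + n * tau2) * s ** 2))

def changepoint_posterior(y):
    # exact posterior over the change-point location under a uniform prior
    n = len(y)
    logpost = np.array([segment_logml(y[:k]) + segment_logml(y[k:]) for k in range(1, n)])
    logpost -= logpost.max()                       # stabilize before exponentiating
    post = np.exp(logpost)
    return post / post.sum()

# usage: mean shift from 0 to 1.5 after observation 60
rng = np.random.default_rng(4)
y = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(1.5, 1.0, 40)])
post = changepoint_posterior(y)
print("posterior mode of the change-point location:", 1 + int(post.argmax()))
```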

    Shrinkage Tuning Parameter Selection in Precision Matrices Estimation

    Recent literature provides many computational and modeling approaches for covariance matrix estimation in penalized Gaussian graphical models, but relatively little study has been carried out on the choice of the tuning parameter. This paper tries to fill this gap by focusing on the problem of shrinkage parameter selection when estimating sparse precision matrices using the penalized likelihood approach. Previous approaches typically used K-fold cross-validation for this purpose. In this paper, we first derive a generalized approximate cross-validation criterion for tuning parameter selection, which is not only a computationally more efficient alternative but also achieves a smaller error rate for model fitting compared with leave-one-out cross-validation. For consistency in the selection of the nonzero entries of the precision matrix, we employ a Bayesian information criterion which can provably identify the nonzero conditional correlations in the Gaussian model. Our simulations demonstrate the general superiority of the two proposed selectors over leave-one-out cross-validation, ten-fold cross-validation, and the Akaike information criterion.
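
    A minimal sketch of information-criterion-based tuning for sparse precision matrix estimation, using scikit-learn's graphical lasso over a grid of penalty values and a BIC score; the penalty grid and the degrees-of-freedom convention (number of nonzero off-diagonal entries in the upper triangle) are assumptions for illustration, not the exact criteria derived in the paper.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(5)
n, p = 200, 10
X = rng.standard_normal((n, p))
X[:, 1] += 0.8 * X[:, 0]                         # induce one conditional dependence
S = np.cov(X, rowvar=False)

def bic(theta, S, n):
    # Gaussian log-likelihood of the precision matrix plus a sparsity penalty
    loglik = n / 2.0 * (np.linalg.slogdet(theta)[1] - np.trace(S @ theta))
    df = np.count_nonzero(np.triu(theta, k=1))   # number of estimated edges
    return -2.0 * loglik + np.log(n) * df

alphas = np.logspace(-2, 0, 10)
scores = [bic(GraphicalLasso(alpha=a, max_iter=200).fit(X).precision_, S, n) for a in alphas]
print("BIC-selected shrinkage parameter:", alphas[int(np.argmin(scores))])
```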

    Empirical Likelihood Confidence Intervals for Nonparametric Functional Data Analysis

    We consider the problem of constructing confidence intervals for nonparametric functional data analysis using empirical likelihood. In this doubly infinite-dimensional context, we demonstrate Wilks's phenomenon and propose a bias-corrected construction that requires neither undersmoothing nor direct bias estimation. We also extend our results to partially linear regression involving functional data. Our numerical results demonstrate the improved performance of empirical likelihood over approximations based on asymptotic normality.
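
    The basic ingredient of the construction above is the empirical likelihood ratio and its Wilks-type chi-squared calibration. A minimal sketch for the simplest case, a confidence interval for a scalar mean (the classical Owen construction), follows; the exponential toy data and the grid search over candidate means are assumptions, and the functional-data and bias-corrected versions in the paper are substantially more involved.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_logratio(x, mu):
    # -2 log empirical likelihood ratio for the mean being mu (Owen's construction)
    d = x - mu
    if d.max() <= 0 or d.min() >= 0:      # mu outside the convex hull of the data
        return np.inf
    g = lambda lam: np.sum(d / (1.0 + lam * d))
    lam = brentq(g, -1.0 / d.max() + 1e-9, -1.0 / d.min() - 1e-9)
    return 2.0 * np.sum(np.log1p(lam * d))

rng = np.random.default_rng(6)
x = rng.exponential(scale=2.0, size=80)
cutoff = chi2.ppf(0.95, df=1)             # Wilks calibration with one degree of freedom
grid = np.linspace(x.mean() - 1.5, x.mean() + 1.5, 400)
inside = [mu for mu in grid if el_logratio(x, mu) <= cutoff]
print("95%% EL interval for the mean: (%.3f, %.3f)" % (min(inside), max(inside)))
```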

    A simple and efficient algorithm for fused lasso signal approximator with convex loss function

    We consider the augmented Lagrangian method (ALM) as a solver for the fused lasso signal approximator (FLSA) problem. The ALM is a dual method in which squares of the constraint functions are added as penalties to the Lagrangian. In order to apply this method to the FLSA, two types of auxiliary variables are introduced to transform the original unconstrained minimization problem into a linearly constrained minimization problem. Each update in this iterative algorithm consists of just a simple one-dimensional convex programming problem, with a closed-form solution in many cases. While the existing literature has mostly focused on the quadratic loss function, our algorithm can easily be implemented for general convex losses. The most attractive feature of this algorithm is its simplicity of implementation compared to other existing fast solvers. We also provide some convergence analysis of the algorithm. Finally, the method is illustrated on simulated datasets.
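
    A minimal sketch of the two-auxiliary-variable splitting described above, specialized to the quadratic loss and solved with a scaled-dual ADMM iteration as a stand-in for the paper's augmented Lagrangian scheme; the step parameter rho, the fixed iteration count, and the toy signal are assumptions for illustration.

```python
import numpy as np

def flsa_admm(y, lam1, lam2, rho=1.0, n_iter=300):
    """Fused lasso signal approximator with quadratic loss via an ADMM-style splitting."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)                 # (n-1) x n first-difference matrix
    A = (1.0 + rho) * np.eye(n) + rho * D.T @ D    # system matrix for the beta-update
    beta = y.copy()
    z1, u1 = np.zeros(n), np.zeros(n)              # auxiliary/dual variables for the sparsity term
    z2, u2 = np.zeros(n - 1), np.zeros(n - 1)      # auxiliary/dual variables for the fusion term
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for _ in range(n_iter):
        beta = np.linalg.solve(A, y + rho * (z1 - u1) + rho * D.T @ (z2 - u2))
        z1 = soft(beta + u1, lam1 / rho)           # one-dimensional closed-form updates
        z2 = soft(D @ beta + u2, lam2 / rho)
        u1 += beta - z1
        u2 += D @ beta - z2
    return z1                                      # sparse, piecewise-constant estimate

# usage: recover a noisy piecewise-constant signal
rng = np.random.default_rng(7)
signal = np.concatenate([np.zeros(40), 2.0 * np.ones(30), np.zeros(30)])
y = signal + 0.3 * rng.standard_normal(100)
beta_hat = flsa_admm(y, lam1=0.1, lam2=2.0)
```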